Appendix 1: Back-imagination and Back-speech

Neural Information Processing Systems

Figure 1: Illustrative examples of the two proposed techniques, Back-imagination and Back-speech. Tiny ImageNet [Le and Yang, 2015] serves as a compact version of the full ImageNet dataset. The Stanford Sentiment Treebank-2 (SST-2) [Socher et al., 2013] is a sentiment classification dataset. Given the scarcity of datasets for understanding natural language in visual scenes, we introduce a novel textual entailment dataset, named Textual Natural Contextual Classification (TNCC). This dataset is built on Crisscrossed Captions [Parekh et al., 2020], an image-caption dataset. In this work, we employ a uniform experimental configuration for both the textual entailment and sentiment classification tasks. For the image classification task, we employ the ResNet18 [He et al., 2015] model, which is considered more suitable for small datasets.


A Proof of Theorem

Neural Information Processing Systems

In this section, we prove the disentanglement identifiability of the inferred exogenous variable. Our proof consists of three main components. Then we have (f, T, λ) ∼ (f̃, T̃, λ̃). The conditional VAE in this case inherits all the properties of maximum likelihood estimation. The following proof proceeds by contradiction.





Appendix: Language and Visual Entity Relationship Graph for Agent Navigation

Neural Information Processing Systems

Directional features As in previous work [3, 6, 10], we apply a 128-dimensional directional encoding, obtained by replicating (cos θi, sin θi, cos φi, sin φi) 32 times, to represent the orientation of each single view i with respect to the agent's current orientation, where θi and φi are the heading and elevation angles to that single view. Replicating the encoding 32 times does not enrich its information, but it makes the gradient 32 times larger during back-propagation. We suspect that this helps the agent learn action-related terms (e.g.
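The directional encoding above can be sketched in a few lines; the function name and the use of NumPy are illustrative, while the 4-value base vector and the 32x replication follow the text.

```python
import numpy as np

def directional_encoding(theta, phi, replicate=32):
    """128-d directional feature: (cos theta, sin theta, cos phi, sin phi)
    tiled `replicate` (=32) times.

    theta / phi are the heading and elevation angles (in radians) of a
    single view relative to the agent's current orientation.
    """
    base = np.array([np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)])
    return np.tile(base, replicate)  # shape (4 * replicate,) = (128,)

feat = directional_encoding(np.pi / 4, 0.0)
print(feat.shape)  # (128,)
```

Because the feature is a pure tiling, every 4-element chunk is identical; the replication changes only the gradient magnitude, not the information content, exactly as the text notes.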




Reinforcement Learning with Augmented Data

Neural Information Processing Systems

Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations - random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks.
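The two new augmentations introduced above can be sketched as follows. These are illustrative implementations, not the paper's code: the scale range for random amplitude scale and the canvas size for random translate are hypothetical hyperparameters.

```python
import numpy as np

def random_amplitude_scale(state, low=0.6, high=1.4, rng=None):
    """Random amplitude scale for state-based inputs: multiply the state
    vector by a single uniform random scalar. The (low, high) range here
    is an illustrative choice."""
    rng = np.random.default_rng() if rng is None else rng
    return state * rng.uniform(low, high)

def random_translate(img, size, rng=None):
    """Random translate for pixel-based inputs: place an HxWxC image at a
    random offset inside a larger (size x size) zero-padded canvas."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = img.shape
    out = np.zeros((size, size, c), dtype=img.dtype)
    top = rng.integers(0, size - h + 1)
    left = rng.integers(0, size - w + 1)
    out[top:top + h, left:left + w] = img
    return out

# Usage: augment a 4-d proprioceptive state and a 64x64 RGB observation.
state_aug = random_amplitude_scale(np.ones(4), rng=np.random.default_rng(0))
img_aug = random_translate(np.ones((64, 64, 3), dtype=np.float32), 72,
                           rng=np.random.default_rng(0))
```

As plug-and-play transforms, these would be applied to observations sampled from the replay buffer before they reach the RL algorithm, leaving the algorithm itself unchanged.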